9 research outputs found

    Quantifying impact on safety from cyber-attacks on cyber-physical systems

    Full text link
    We propose a novel framework for modelling attack scenarios in cyber-physical control systems: we represent a cyber-physical system as a constrained switching system, where a single model embeds the dynamics of the physical process, the attack patterns, and the attack detection schemes. We show that this is compatible with established results in the analysis of hybrid automata and, specifically, constrained switching systems. Moreover, we use the developed models to compute the impact of cyber-attacks on the safety properties of the system. In particular, we characterise system safety as an asymptotic property by calculating the maximal safe set. The resulting new impact metrics intuitively quantify the degradation of safety under attack. We showcase our results via illustrative examples.
    Comment: 8 pages, 5 figures, submitted for presentation to IFAC World Congress 2023, Yokohama, JAPAN
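
    To make the safe-set idea concrete, here is a minimal sketch of computing a maximal safe set for a scalar switched system by fixed-point iteration on a grid, scoring attack impact by the loss of safe-set volume. The dynamics, modes, grid, and impact metric are illustrative assumptions, not the paper's models.

    import numpy as np

    X = np.linspace(-1.0, 1.0, 2001)        # constraint set X = [-1, 1], gridded
    dx = X[1] - X[0]
    modes_nominal = [0.8]                   # nominal closed-loop map x+ = a*x
    modes_attack = [0.8, 1.1]               # an attack enables a destabilising mode

    def maximal_safe_set(modes, iters=200):
        safe = np.ones_like(X, dtype=bool)  # start from the whole constraint set
        for _ in range(iters):
            nxt = safe.copy()
            for i, xi in enumerate(X):
                if not safe[i]:
                    continue
                # xi stays safe only if every admissible mode maps it into the set
                for a in modes:
                    j = int(round((a * xi - X[0]) / dx))
                    if j < 0 or j >= len(X) or not safe[j]:
                        nxt[i] = False
                        break
            if np.array_equal(nxt, safe):   # fixed point: maximal safe set found
                return safe
            safe = nxt
        return safe

    vol = lambda s: s.mean() * (X[-1] - X[0])   # crude volume on the grid
    impact = 1 - vol(maximal_safe_set(modes_attack)) / vol(maximal_safe_set(modes_nominal))
    print(f"safety degradation under attack: {impact:.1%}")

    Under the nominal mode the whole constraint set is safe, while the destabilising attack mode shrinks the safe set almost to the origin, so the printed degradation is close to 100%.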

    Distributed Sequential Receding Horizon Control of Multi-Agent Systems under Recurring Signal Temporal Logic

    Full text link
    We consider the synthesis problem for a multi-agent system under Signal Temporal Logic (STL) specifications representing bounded-time tasks that must be satisfied recurrently over an infinite horizon. Motivated by the scarcity of methods that handle recurring STL specifications systematically, we tackle the infinite-horizon control problem with a receding horizon scheme equipped with additional STL constraints that introduce minimal complexity, and a backward-reachability-based terminal condition that is straightforward to construct and ensures recursive feasibility. Subsequently, assuming a separable performance index, we decompose the global receding horizon optimization problem, defined at the multi-agent level, into local programs at the individual-agent level, each of which minimizes a local cost function subject to local and joint STL constraints. We propose a scheduling policy that allows individual agents to sequentially optimize their control actions while maintaining recursive feasibility. This results in a distributed strategy that can operate online as a model predictive controller. Finally, we illustrate the effectiveness of our method via a multi-agent system assigned a surveillance task.
    Comment: submitted to ECC2
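
    The sequential scheme can be pictured with a small sketch: agents take turns solving a local receding-horizon program against the other agents' most recent plans. The single-integrator dynamics, the "reach the goal every T steps" task standing in for the STL specification, the spacing constraint, and the SLSQP solver are all illustrative assumptions rather than the paper's formulation.

    import numpy as np
    from scipy.optimize import minimize

    H, T = 5, 10                                   # horizon, task period
    goals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
    x = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]
    plans = [np.zeros((H, 2)) for _ in goals]      # warm-started input sequences

    def rollout(x0, u):
        return x0 + np.cumsum(u, axis=0)           # x[k+1] = x[k] + u[k]

    def local_cost(u_flat):
        return np.sum(u_flat ** 2)                 # separable performance index

    def task_con(u_flat, x0, goal, k_now):
        # recurring task: be within 0.1 of the goal at each period boundary
        k_due = T - 1 - (k_now % T)                # steps until the next deadline
        if k_due >= H:                             # deadline beyond the horizon
            return 1.0
        traj = rollout(x0, u_flat.reshape(H, 2))
        return 0.1 - np.linalg.norm(traj[k_due] - goal)

    def joint_con(u_flat, x0, other_traj):
        # stay at least 0.2 away from the other agents' latest planned paths
        traj = rollout(x0, u_flat.reshape(H, 2))
        return np.linalg.norm(traj - other_traj, axis=1).min() - 0.2

    for k in range(2 * T):                         # closed loop
        for i in range(len(x)):                    # agents optimize in sequence
            others = [rollout(x[j], plans[j]) for j in range(len(x)) if j != i]
            cons = [{"type": "ineq", "fun": task_con, "args": (x[i], goals[i], k)}]
            cons += [{"type": "ineq", "fun": joint_con, "args": (x[i], o)}
                     for o in others]
            res = minimize(local_cost, plans[i].ravel(), constraints=cons,
                           method="SLSQP")
            plans[i] = res.x.reshape(H, 2)
        for i in range(len(x)):                    # apply first input, shift plan
            x[i] = x[i] + plans[i][0]
            plans[i] = np.vstack([plans[i][1:], np.zeros((1, 2))])

    Because each agent optimizes against the others' frozen plans, a previously feasible joint solution remains available to every agent in the sequence, which is the intuition behind recursive feasibility of the scheduling policy.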

    AIMD-inspired switching control of computing networks

    No full text
    We consider the scheduling problem for requests entering a distributed computing network consisting of a set of non-cooperative nodes, where each node is represented by a queue combined with a computing unit. The absence of interaction between nodes renders decentralised scheduling challenging, and most existing results focus on centralised or static solutions. Inspired by congestion control, we propose a new average-based additive increase multiplicative decrease (AIMD) admission control policy, which requires minimal communication between individual nodes and an aggregator. The proposed admission policy induces a discrete-event model expressed as a positive constrained switching system that is triggered whenever the queue of the aggregation point of requests vanishes. We show convergence of the proposed AIMD system under unknown, peak-bounded workload profiles by analysing the spectrum of rank-one perturbations of symmetric matrices and the boundedness of the joint spectral radius of sets of symmetric matrices. Contrary to methods that address scheduling and resource allocation asynchronously or via a two-step approach, our AIMD-based scheme can tackle both tasks simultaneously. We illustrate this by proposing a decentralised resource allocation controller coupled with the scheduling scheme, leading to a stable closed-loop control system that is guaranteed to avoid underutilisation of resources and is tunable via the sets of AIMD parameters.
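
    As a rough illustration of the AIMD mechanism (plain AIMD rather than the paper's average-based variant; the event direction, parameters, and workload are assumptions for the sketch): each node grows its admission rate additively between events and backs off multiplicatively whenever the aggregator's queue empties.

    import numpy as np

    rng = np.random.default_rng(0)
    N, steps = 4, 500
    alpha = np.full(N, 0.05)          # additive-increase rates, one per node
    beta = np.full(N, 0.5)            # multiplicative-decrease factors
    r = np.full(N, 0.1)               # per-node admission rates
    q = 5.0                           # backlog at the aggregation point
    events = []

    for k in range(steps):
        workload = rng.uniform(0.2, 1.0)      # unknown, peak-bounded arrivals
        q = q + workload - r.sum()            # nodes drain the shared backlog
        if q <= 0.0:                          # queue vanished: AIMD event
            q = 0.0
            r = beta * r                      # synchronized multiplicative back-off
            events.append(k)
        else:
            r = r + alpha                     # additive increase between events

    print(f"{len(events)} AIMD events, final rates {np.round(r, 3)}")

    The only information a node needs is the event signal from the aggregator, which is what keeps the communication requirements minimal; the long-run rate shares are then shaped by the choice of alpha and beta.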

    Optimal resource scheduling and allocation in distributed computing systems

    No full text
    At the core of distributed computing systems lie two questions: how to schedule incoming requests and how to allocate the computing nodes so as to minimize both time and computation costs. In this paper, we propose a cost-aware optimal scheduling and allocation strategy for distributed computing systems that minimizes a cost function comprising response time and service cost. First, based on the proposed cost function, we derive the optimal request scheduling policy and the optimal resource allocation policy simultaneously. Second, to capture the effect of incoming requests on the scheduling policy, the additive increase multiplicative decrease (AIMD) mechanism is used to model the relation between request arrivals and scheduling. In particular, the AIMD parameters can be designed such that the derived optimal strategy remains valid. Finally, a numerical example is presented to illustrate the derived results.
    Comment: This work has been submitted to ACC202
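
    A toy version of such a cost trade-off (assuming an M/M/1-style response-time term and a linear service cost, which need not match the paper's cost function): for J(mu) = 1/(mu - lambda) + c*mu, setting dJ/dmu = 0 gives the closed-form allocation mu* = lambda + 1/sqrt(c).

    import numpy as np

    def optimal_service_rate(lam, c):
        # minimize J(mu) = 1/(mu - lam) + c*mu over mu > lam:
        # dJ/dmu = -1/(mu - lam)**2 + c = 0  =>  mu* = lam + 1/sqrt(c)
        return lam + 1.0 / np.sqrt(c)

    Lambda, n, c = 12.0, 4, 0.04        # total arrivals, nodes, cost weight
    lam = Lambda / n                    # symmetric split across identical nodes
    mu = optimal_service_rate(lam, c)
    total = n * (1.0 / (mu - lam) + c * mu)
    print(f"per-node service rate {mu:.2f}, total cost {total:.2f}")

    Raising the cost weight c pushes the optimal service rate down towards the arrival rate, trading longer response times for cheaper service, which is the trade-off the paper's cost function is designed to balance.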

    Distributed resource autoscaling in Kubernetes edge clusters

    No full text
    Maximizing the performance of modern applications requires timely management of virtualized resources. However, proactively deploying resources to meet specific application requirements under a dynamic workload profile of incoming requests is extremely challenging. To this end, the fundamental problems of task scheduling and resource autoscaling must be jointly addressed. This paper presents a scalable architecture, compatible with the decentralized nature of Kubernetes, that solves both. Exploiting the stability guarantees of a novel AIMD-like task scheduling solution, we dynamically redirect incoming requests towards the containerized application. To cope with dynamic workloads, a prediction mechanism estimates the number of incoming requests. Additionally, a Machine Learning (ML)-based application profiling model is introduced to address scaling, co-designing the theoretically computed service rates obtained from the AIMD algorithm with the current performance metrics. The proposed solution is compared with state-of-the-art autoscaling techniques on a realistic dataset in a small edge infrastructure, and the trade-off between resource utilization and QoS violations is analyzed. Our solution provides better resource utilization, reducing CPU core usage by 8% with only an acceptable increase in QoS violations.
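
    A stripped-down sketch of such a scaling decision (the predictor, names, and thresholds are illustrative assumptions; the paper's profiling model and Kubernetes integration are more involved): predicted arrivals divided by a profiled per-replica service rate yield the desired replica count.

    import math

    def predict_arrivals(history):
        # placeholder predictor: exponentially weighted moving average
        ewma = history[0]
        for h in history[1:]:
            ewma = 0.8 * ewma + 0.2 * h
        return ewma

    def desired_replicas(history, rate_per_replica, target_util=0.8):
        # rate_per_replica would come from the AIMD scheme / ML profile
        lam_hat = predict_arrivals(history)
        return max(1, math.ceil(lam_hat / (target_util * rate_per_replica)))

    print(desired_replicas([90, 110, 130, 150], rate_per_replica=40))  # -> 4

    Scaling on predicted rather than observed load is what makes the scheme proactive; the target utilization leaves headroom so that prediction errors translate into fewer QoS violations rather than dropped requests.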